Target recognition for synthetic aperture radar imagery based on convolutional neural network feature fusion
Driven by the great success of deep convolutional neural networks (CNNs), currently used in numerous computer vision applications, we extend the usability of visual-domain CNNs into the synthetic aperture radar (SAR) data domain without employing transfer learning. Our SAR automatic target recognition (ATR) architecture efficiently extends the pre-trained Visual Geometry Group (VGG) CNN from the visual domain into the X-band SAR data domain by clustering its neuron layers, bridging the visual–SAR modality gap by fusing the features extracted from the hidden layers, and by employing a local feature matching scheme. Trials on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset under various setups and nuisances demonstrate highly appealing ATR performance, achieving 100% and 99.79% accuracy in the 3-class and 10-class ATR problems, respectively. We also confirm the validity, robustness, and conceptual coherence of the proposed method by extending it to several state-of-the-art CNNs and commonly used local feature similarity/match metrics.
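The fusion-and-matching idea described above can be sketched in a minimal form: feature vectors taken from several hidden layers are L2-normalised per layer, concatenated into one fused descriptor, and a gallery is searched by cosine similarity. This is only an illustrative skeleton under assumed inputs (arbitrary vectors standing in for real CNN activations); the paper's neuron-layer clustering and local matching scheme are considerably richer, and all names here are hypothetical.

```python
import numpy as np

def fuse_and_match(query_layers, gallery_layers):
    """Illustrative sketch: fuse per-layer features by L2-normalising and
    concatenating them, then return the gallery index with the highest
    cosine similarity to the fused query descriptor."""
    def fuse(layers):
        # normalise each layer's feature vector before concatenation so
        # no single layer dominates the fused descriptor
        return np.concatenate([v / (np.linalg.norm(v) + 1e-12) for v in layers])

    q = fuse(query_layers)
    scores = []
    for layers in gallery_layers:
        g = fuse(layers)
        scores.append(float(q @ g / (np.linalg.norm(q) * np.linalg.norm(g))))
    return int(np.argmax(scores)), scores
```

In a real pipeline the `query_layers` would be activations extracted from the chosen hidden layers of the CNN for a SAR chip; here they are plain vectors so the matching logic can be seen in isolation.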
Histogram of distances for local surface description
3D object recognition has proven superior to its 2D counterpart in numerous implementations, making it an active research topic. Local-feature-based proposals in particular, although quite accurate, are limited by the stability of the local reference frame or axis (LRF/A) on which their descriptors are defined. Additionally, extra processing time is required to estimate the LRF for each local patch. We propose a 3D descriptor that removes the need for an LRF/A, dramatically reducing the processing time required, while also achieving robustness to high levels of noise and non-uniform subsampling. Our approach, named Histogram of Distances (HoD), is based on multiple L2-norm metrics of local patches, yielding a simple and fast-to-compute descriptor suitable for time-critical applications. Evaluation on popular point clouds of both high and low quality showed promising performance.
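The core idea — describing a local patch purely through L2 distances, which need no reference frame — can be sketched as follows. This is a minimal illustration of a histogram-of-distances style descriptor, not the authors' implementation; the parameter names and binning choices are assumptions.

```python
import numpy as np

def histogram_of_distances(patch, bins=16, r_max=None):
    """Minimal sketch of a histogram-of-distances descriptor: bin the
    pairwise L2 distances within a local 3D patch into a normalised
    histogram. Because distances are invariant to rigid motion, no
    local reference frame (LRF/A) is needed."""
    patch = np.asarray(patch, dtype=float)          # (N, 3) points
    diffs = patch[:, None, :] - patch[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)              # (N, N) pairwise distances
    iu = np.triu_indices(len(patch), k=1)           # keep each pair once
    d = d[iu]
    if r_max is None:
        r_max = d.max() if d.size else 1.0          # normalise by patch extent
    hist, _ = np.histogram(d, bins=bins, range=(0.0, r_max))
    s = hist.sum()
    return hist / s if s else hist.astype(float)
```

Since only inter-point distances are used, the descriptor is identical for a rotated or translated copy of the same patch — the property that lets this family of descriptors skip LRF estimation entirely.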
3D automatic target recognition for missile platforms
The quest for military Automatic Target Recognition (ATR) procedures arises from the demand to reduce collateral damage and fratricide. Although missiles with two-dimensional ATR capabilities already exist, future Light Detection and Ranging (LIDAR) missiles with three-dimensional (3D) ATR abilities should significantly improve the missile's effectiveness in complex battlefields, because 3D ATR can encode the target's underlying structure and thus reinforce target recognition. However, current military-grade 3D ATR and military-applied computer vision algorithms for object recognition do not offer optimal solutions in the context of an ATR-capable LIDAR-based missile, primarily due to the computational and memory (storage) constraints that missiles impose. This research therefore first introduces a 3D descriptor taxonomy for the Local and the Global descriptor domains, capturing the processing cost of each candidate option. Through these taxonomies, the optimum missile-oriented descriptor per domain is identified, which further pinpoints the research route for this thesis. In terms of 3D descriptors suitable for missiles, the contribution of this thesis is one 3D Global descriptor and four 3D Local descriptors, namely the SURF Projection Recognition (SPR), the Histogram of Distances (HoD), its processing-efficient variant (HoD-S) and its binary variant (B-HoD). These are challenged against current state-of-the-art 3D descriptors on standard commercial datasets, as well as on highly credible simulated air-to-ground missile engagement scenarios that consider various platform parameters and nuisances, including simulated scale change and atmospheric disturbances.
The results obtained over the different datasets show an outstanding computational improvement, on average 19× faster than state-of-the-art techniques in the literature, while maintaining, and on some occasions even improving, the detection rate, with a minimum of 90% of targets correctly classified.
Evaluating 3D local descriptors for future LIDAR missiles with automatic target recognition capabilities
Future light detection and ranging seeker missiles incorporating 3D automatic target recognition (ATR) capabilities can improve the missile's effectiveness in complex battlefield environments. Considering the progress of local 3D descriptors in the computer vision domain, this paper evaluates a number of these on highly credible simulated air-to-ground missile engagement scenarios. The latter take into account numerous parameters that have not yet been investigated in the literature, including variable missile–target range, 6-degrees-of-freedom missile motion and atmospheric disturbances. Additionally, the evaluation process utilizes our suggested 3D ATR architecture, which compared to current pipelines involves more post-processing layers aimed at further enhancing 3D ATR performance. Our trials reveal that computer vision algorithms are appealing for missile-oriented 3D ATR.
SAR automatic target recognition based on convolutional neural networks
We propose a multi-modal, multi-discipline strategy for Automatic Target Recognition (ATR) on Synthetic Aperture Radar (SAR) imagery. Our architecture relies on a Convolutional Neural Network pre-trained in the RGB domain, innovatively applied to SAR imagery and combined with multiclass Support Vector Machine classification. The multi-modal aspect of our architecture reinforces its generalisation capabilities, while the multi-discipline aspect bridges the modality gap. Even though our technique is trained at a single depression angle of 17°, average performance on the MSTAR database over a 10-class target classification problem at 15°, 30° and 45° depression is 97.8%. This multi-target, multi-depression ATR capability has not previously been reported in the MSTAR literature.
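The classification stage described above — a multiclass SVM fitted on top of fixed CNN features — can be illustrated with a minimal self-contained stand-in. To keep the sketch dependency-free, a one-vs-rest linear SVM trained with Pegasos-style subgradient steps replaces a library SVM; in the paper's pipeline the inputs would be CNN hidden-layer features rather than raw vectors, and all names and hyperparameters here are assumptions.

```python
import numpy as np

def train_linear_svm_ovr(X, y, lam=0.01, epochs=100, seed=0):
    """One-vs-rest linear SVM via Pegasos-style subgradient descent:
    for each class, minimise the regularised hinge loss against the
    rest, storing one weight vector (with absorbed bias) per class."""
    rng = np.random.default_rng(seed)
    X = np.hstack([X, np.ones((len(X), 1))])    # absorb bias term
    classes = np.unique(y)
    W = np.zeros((len(classes), X.shape[1]))
    for k, c in enumerate(classes):
        yk = np.where(y == c, 1.0, -1.0)        # binary one-vs-rest labels
        w = np.zeros(X.shape[1])
        t = 0
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                t += 1
                eta = 1.0 / (lam * t)           # Pegasos step size
                margin = yk[i] * (w @ X[i])
                w *= (1.0 - eta * lam)          # regularisation shrink
                if margin < 1.0:                # hinge-loss subgradient step
                    w += eta * yk[i] * X[i]
        W[k] = w
    return classes, W

def predict_svm(model, X):
    """Predict by taking the class whose decision value is largest."""
    classes, W = model
    X = np.hstack([X, np.ones((len(X), 1))])
    return classes[np.argmax(X @ W.T, axis=1)]
```

The one-vs-rest argmax mirrors how a multiclass SVM resolves the 10-class MSTAR problem, though production code would typically use an established SVM library rather than this hand-rolled solver.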
H∞ LIDAR odometry for spacecraft relative navigation
Current light detection and ranging (LIDAR) based odometry solutions used for spacecraft relative navigation suffer from several deficiencies, including an off-line training requirement and reliance on the iterative closest point (ICP) algorithm, which does not guarantee a globally optimal solution. To address this, the authors suggest a robust architecture that overcomes the problems of current proposals by combining 3D local feature matching with an adaptive variant of the H∞ recursive filtering process. Trials on real laser scans of an EnviSat model demonstrate that the proposed architecture achieves at least one order of magnitude better accuracy than ICP.
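To make the H∞ filtering ingredient concrete, below is a heavily simplified scalar sketch of one common discrete-time H∞ recursion (constant state, identity state and measurement models). It is not the paper's adaptive, feature-matching-driven variant; the parameter names and tuning values are illustrative assumptions.

```python
import numpy as np

def hinf_filter_1d(measurements, q=1e-4, r=0.05, theta=0.5):
    """Scalar discrete-time H-infinity filter sketch with F = H = 1.
    theta is the robustness bound on the worst-case estimation error;
    as theta -> 0 the gain reduces to the scalar Kalman gain p/(r+p)."""
    x, p = float(measurements[0]), 1.0
    estimates = []
    for z in measurements:
        # H-infinity gain: the -theta*p term enlarges the gain relative
        # to the Kalman filter, trading average-case optimality for
        # robustness to model uncertainty
        denom = 1.0 - theta * p + p / r
        k = (p / denom) / r
        x = x + k * (z - x)                 # measurement update
        p = p / denom + q                   # covariance-like recursion
        estimates.append(x)
    return np.array(estimates)
```

In the paper's setting the filtered quantity would be the relative pose derived from 3D feature matches, in vector form with full matrices in place of these scalars; the scalar version only shows the shape of the recursion.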